Signature-based techniques give mathematical insight into the interactions between complex streams of evolving data. These insights can be naturally translated into numerical approaches to understanding streamed data and, perhaps because of their mathematical precision, have proved useful in analysing streamed data in situations where the data is irregular rather than stationary and where both the dimension of the data and the sample sizes are only moderate. Understanding streamed multi-modal data is exponential in nature: a word of $n$ letters from an alphabet of size $d$ can be any one of $d^n$ messages. Signatures remove the exponential amount of noise that arises from sampling irregularity, but an exponential amount of information still remains. This survey aims to stay within the domain where that exponential scaling can be managed directly. Scalability issues are an important challenge in many problems, but they would require another survey article and further ideas. This survey describes a range of contexts where the data sets are small enough to remove the possibility of massive machine learning, and where small sets of context-free and principled features can be used effectively. The mathematical nature of the tools can make their use intimidating to non-mathematicians. The examples presented in this article are intended to bridge this communication gap and provide tractable working examples drawn from machine learning contexts. Notebooks with some of these examples are available online. This survey builds on an earlier paper by Ilya Chevyrev and Andrey Kormilitzin with broadly similar aims at an earlier point in the development of this machinery. This article illustrates how the theoretical insights offered by signatures are straightforwardly realised in the analysis of application data in a way that is largely agnostic to the data type.
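To make the kind of feature discussed in this survey concrete, the sketch below computes the level-1 and level-2 terms of the signature of a piecewise-linear path in plain NumPy via Chen's identity, and checks that refining the sampling of the same polyline leaves those terms unchanged. The function name and truncation depth are illustrative choices made here, not prescriptions from the survey; dedicated packages would normally be used for deeper truncations.

```python
import numpy as np

def signature_level_1_2(path):
    """Level-1 and level-2 signature terms of a piecewise-linear path.

    path: array of shape (T, d) holding the sampled stream. Returns S1 in R^d
    (the total increment) and S2 in R^{d x d} (the iterated integrals
    int dX^i dX^j), assembled segment by segment via Chen's identity."""
    d = path.shape[1]
    S1, S2 = np.zeros(d), np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        # Concatenate the running signature with one linear segment.
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return S1, S2

# Refining a polyline without changing its shape (e.g. resampling the same
# segments at a higher rate) leaves the signature terms unchanged.
pts = np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.5], [2.0, 2.0]])
mid = 0.5 * (pts[:-1] + pts[1:])
refined = np.vstack([np.column_stack([pts[:-1], mid]).reshape(-1, 2), pts[-1]])
print(np.allclose(signature_level_1_2(pts)[1], signature_level_1_2(refined)[1]))  # True
```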
This paper contributes to the challenge of skeleton-based human action recognition. The key step is to develop a generic network architecture for extracting discriminative features from spatio-temporal skeleton data. We propose a novel module, namely Logsig-RNN, which combines a log-signature layer with a recurrent-type neural network (RNN). The former comes from the mathematically principled technology of signatures and log-signatures as representations for streamed data, and it can manage high sample-rate streams, non-uniform sampling, and time series of variable length. It serves as an enhancement of the recurrent layer and can be conveniently plugged into neural networks. In addition, we propose two path transformation layers that significantly reduce the path dimension while retaining the essential information fed into the Logsig-RNN module. Finally, numerical results demonstrate that replacing the RNN module with the Logsig-RNN module in SOTA networks consistently improves performance on both the Chalearn gesture data and the NTU RGB+D 120 action data in terms of accuracy and robustness. In particular, we achieve state-of-the-art accuracy on the Chalearn 2013 gesture data by combining simple path transformation layers with the Logsig-RNN. Code is available at https://github.com/steveliao93/gcn_logsigrnn.
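The sketch below is a simplified, hypothetical rendering of the general Logsig-RNN idea described above: split a long stream into a small number of segments, replace each segment by a truncated (level-2) log-signature, i.e. its increments and Lévy areas, and feed the resulting short feature sequence to a standard LSTM. The segmentation scheme, truncation depth, and layer sizes are assumptions for illustration and do not reproduce the authors' implementation (see the linked repository for that).

```python
import numpy as np
import torch
import torch.nn as nn

def logsig_level2(segment):
    """Level-2 log-signature of a piecewise-linear segment: the increments
    plus the Levy areas (antisymmetric part of the iterated integrals)."""
    d = segment.shape[1]
    s1, s2 = np.zeros(d), np.zeros((d, d))
    for delta in np.diff(segment, axis=0):   # Chen's identity, segment by segment
        s2 += np.outer(s1, delta) + 0.5 * np.outer(delta, delta)
        s1 += delta
    levy = 0.5 * (s2 - s2.T)
    iu = np.triu_indices(d, k=1)
    return np.concatenate([s1, levy[iu]])    # vector in R^{d + d(d-1)/2}

def logsig_features(stream, num_segments):
    """Split a (T, d) stream into windows and map each window to its
    truncated log-signature, turning a long, possibly irregularly sampled
    stream into a short, fixed-width feature sequence."""
    windows = np.array_split(stream, num_segments)
    return np.stack([logsig_level2(w) for w in windows])

# Hypothetical usage: the short log-signature sequence feeds a standard LSTM.
stream = np.cumsum(np.random.randn(300, 3), axis=0)          # toy 3-d stream
feats = torch.tensor(logsig_features(stream, 10), dtype=torch.float32)
rnn = nn.LSTM(input_size=feats.shape[-1], hidden_size=32, batch_first=True)
out, _ = rnn(feats.unsqueeze(0))                              # shape (1, 10, 32)
```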
A stochastic process is a random variable taking values in some space of paths. However, reducing a stochastic process to a path-valued random variable ignores its filtration, i.e. the flow of information carried by the process through time. By conditioning the process on its filtration, we introduce a family of higher order kernel mean embeddings (KMEs) that generalizes the notion of KME and captures additional information related to the filtration. We derive empirical estimators for the associated higher order maximum mean discrepancies (MMDs) and prove consistency. We then construct a filtration-sensitive kernel two-sample test able to pick up information that is missed by the standard MMD test. In addition, leveraging our higher order MMDs, we construct a family of universal kernels on stochastic processes that allows real-world calibration and optimal stopping problems (such as the pricing of American options) to be solved via classical kernel-based regression methods. Finally, adapting existing tests for conditional independence to the case of stochastic processes, we design a causal discovery algorithm to recover the causal graph of structural dependencies among interacting bodies solely from observations of their multidimensional trajectories.
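For orientation, the sketch below implements the classical (first-order) unbiased estimator of the squared MMD between two sample sets, which is the quantity that the higher order, filtration-aware MMDs introduced above generalize. The RBF kernel, bandwidth, and the flattening of sample paths into vectors are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian RBF kernel matrix between two sample sets."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimator of the squared MMD between the laws of X and Y.
    This is the classical first-order quantity; the higher order MMDs of the
    paper replace the samples by conditioned, filtration-aware embeddings."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * Kxy.mean()

# Toy two-sample statistic on flattened sample paths.
X = np.random.randn(50, 20)          # 50 paths of length 20 from P
Y = 1.5 * np.random.randn(50, 20)    # 50 paths of length 20 from Q
print(mmd2_unbiased(X, Y))
```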
Transforming off-the-shelf deep neural network (DNN) models into dynamic multi-exit architectures can achieve inference and transmission efficiency by fragmenting and distributing a large DNN model in edge computing scenarios (e.g., edge devices and cloud servers). In this paper, we propose a novel backdoor attack aimed specifically at dynamic multi-exit DNN models. In particular, we inject a backdoor by poisoning one DNN model's shallow hidden layers, targeting not this vanilla DNN model but only its dynamically deployed multi-exit architectures. The backdoored vanilla model behaves normally in terms of performance, and its backdoor cannot be activated even with the correct trigger. However, the backdoor will be activated once the victims acquire this model and transform it into a dynamic multi-exit architecture at deployment time. We conduct extensive experiments to prove the effectiveness of our attack on three structures (ResNet-56, VGG-16, and MobileNet) with four datasets (CIFAR-10, SVHN, GTSRB, and Tiny-ImageNet), and our backdoor is stealthy enough to evade multiple state-of-the-art backdoor detection or removal methods.
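As a rough, hypothetical illustration of the dynamic multi-exit architectures the attack targets, the sketch below wraps a small convolutional network with early-exit classifiers that return as soon as the softmax confidence at an intermediate layer crosses a threshold. The layer sizes, exit placement, and confidence rule are assumptions for illustration and are unrelated to the paper's poisoning procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    """Toy dynamic multi-exit wrapper: exit heads sit on intermediate feature
    maps, and inference stops at the first exit that is confident enough."""
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold
        self.block1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block3 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        # One early-exit classifier per shallow block plus the final classifier.
        self.exit1 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes))
        self.exit2 = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))
        self.final = nn.Sequential(nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):  # assumes batch size 1 for the early-exit decision
        h = self.block1(x)
        logits = self.exit1(h)
        if F.softmax(logits, dim=1).max() >= self.threshold:
            return logits, "exit1"
        h = self.block2(h)
        logits = self.exit2(h)
        if F.softmax(logits, dim=1).max() >= self.threshold:
            return logits, "exit2"
        return self.final(self.block3(h)), "final"

model = MultiExitNet()
logits, exit_taken = model(torch.randn(1, 3, 32, 32))
```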
The General Associative Memory Model (GAMM) has a constant, state-dependent energy surface that leads the output dynamics to fixed points, retrieving single memories from a collection of memories that can be asynchronously preloaded. We introduce a new class of General Sequential Episodic Memory Models (GSEMM) that, in the adiabatic limit, exhibit a temporally changing energy surface, leading to a series of meta-stable states that are sequential episodic memories. The dynamic energy surface is enabled by newly introduced asymmetric synapses with signal propagation delays in the network's hidden layer. We study the theoretical and empirical properties of two memory models from the GSEMM class, which differ in their activation functions. LISEM has non-linearities in the feature layer, whereas DSEM has non-linearity in the hidden layer. In principle, DSEM has a storage capacity that grows exponentially with the number of neurons in the network. We introduce a learning rule for the synapses based on the energy minimization principle and show that it can learn single memories and their sequential relationships online. This rule is similar to the Hebbian learning algorithm and Spike-Timing Dependent Plasticity (STDP), which describe conditions under which synapses between neurons change strength. Thus, GSEMM combines the static and dynamic properties of episodic memory under a single theoretical framework and bridges neuroscience, machine learning, and artificial intelligence.
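For context, the sketch below shows the classical Hopfield-style associative memory with a Hebbian outer-product rule, i.e. the static fixed-point retrieval setting that GAMM captures and that GSEMM extends with temporally changing energy surfaces. It is a textbook construction, not the paper's model.

```python
import numpy as np

# Classical static associative memory: Hebbian storage, fixed-point retrieval.
rng = np.random.default_rng(0)
n_neurons, n_memories = 100, 5
patterns = rng.choice([-1, 1], size=(n_memories, n_neurons))

# Hebbian outer-product rule: neurons that fire together wire together.
W = (patterns.T @ patterns) / n_neurons
np.fill_diagonal(W, 0.0)

def retrieve(state, steps=20):
    """Asynchronous updates descend the energy E = -0.5 s^T W s to a fixed point."""
    state = state.copy()
    for _ in range(steps):
        for i in rng.permutation(n_neurons):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt one stored memory and recover it from the noisy cue.
cue = patterns[0].copy()
flip = rng.choice(n_neurons, size=15, replace=False)
cue[flip] *= -1
recovered = retrieve(cue)
print("overlap with stored memory:", (recovered @ patterns[0]) / n_neurons)
```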
We examined multiple deep neural network (DNN) architectures for suitability in predicting neurotransmitter concentrations from labeled in vitro fast-scan cyclic voltammetry (FSCV) data collected on carbon fiber electrodes. Suitability is determined by the predictive performance in the "out-of-probe" case, the response to artificially induced electrical noise, and the ability to predict when the model will be errant for a given probe. This work extends prior comparisons of time series classification models by focusing on this specific task. It extends previous applications of machine learning to the FSCV task by using a much larger data set and by incorporating recent advancements in deep neural networks. The InceptionTime architecture, a deep convolutional neural network, had the best absolute predictive performance of the models tested but was more susceptible to noise. A naive multilayer perceptron architecture had the second-lowest prediction error and was less affected by the artificial noise, suggesting that convolutions may not be as important for this task as one might suspect.
Media has a substantial impact on the public perception of events. A one-sided or polarizing perspective on any topic is usually described as media bias. One way bias can be introduced in news articles is by altering word choice. Biased word choices are not always obvious, nor do they exhibit high context-dependency. Hence, detecting bias is often difficult. We propose a Transformer-based deep learning architecture trained via multi-task learning on six bias-related data sets to tackle the media bias detection problem. Our best-performing implementation achieves a macro $F_{1}$ of 0.776, a performance boost of 3\% compared to our baseline, outperforming existing methods. Our results indicate that multi-task learning is a promising alternative for improving existing baseline models in identifying slanted reporting.
Human activity recognition (HAR) using IMU sensors, namely the accelerometer and gyroscope, has several applications in smart homes, healthcare, and human-machine interface systems. In practice, an IMU-based HAR system is expected to encounter variations in measurement due to sensor degradation, alien environments, or sensor noise, and it will be subjected to unknown activities. In view of the practical deployment of such a solution, measures of statistical confidence over the activity class scores are important metrics. In this paper, we therefore propose XAI-BayesHAR, an integrated Bayesian framework that improves the overall activity classification accuracy of IMU-based HAR solutions by recursively tracking the feature embedding vector and its associated uncertainty via a Kalman filter. Additionally, XAI-BayesHAR acts as an out-of-distribution (OOD) detector using the predictive uncertainty, which helps to evaluate and detect alien input data distributions. Furthermore, the Shapley value-based performance of the proposed framework is also evaluated to understand the importance of the feature embedding vector and is accordingly used for model compression.
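As a hypothetical sketch of the recursive tracking step, the code below runs a minimal Kalman filter with identity dynamics over a stream of noisy feature embeddings, maintaining a mean vector and a covariance whose trace can serve as a predictive-uncertainty score. The noise models, dimensions, and class name are assumptions for illustration, not XAI-BayesHAR's actual design.

```python
import numpy as np

class EmbeddingKalmanTracker:
    """Minimal Kalman filter with identity dynamics: recursively tracks a
    feature-embedding vector (mean) and its uncertainty (covariance) from
    noisy per-window embeddings."""
    def __init__(self, dim, process_var=1e-3, obs_var=1e-1):
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)
        self.Q = process_var * np.eye(dim)   # assumed process noise
        self.R = obs_var * np.eye(dim)       # assumed observation noise

    def update(self, z):
        # Predict: identity state transition, uncertainty grows by Q.
        cov_pred = self.cov + self.Q
        # Correct: blend prediction and the new embedding z via the Kalman gain.
        K = cov_pred @ np.linalg.inv(cov_pred + self.R)
        self.mean = self.mean + K @ (z - self.mean)
        self.cov = (np.eye(len(z)) - K) @ cov_pred
        return self.mean, self.cov

# Toy usage: a stream of noisy 8-d embeddings from an IMU window encoder.
tracker = EmbeddingKalmanTracker(dim=8)
for _ in range(50):
    z = np.ones(8) + 0.3 * np.random.randn(8)
    mean, cov = tracker.update(z)
# The trace of the posterior covariance can be used as an uncertainty score
# for flagging out-of-distribution inputs.
print(np.trace(cov))
```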
Despite the recent success of multi-task learning and pre-finetuning for natural language understanding, few works have studied the effects of task families on abstractive text summarization. Task families are a form of task grouping during the pre-finetuning stage to learn common skills, such as reading comprehension. To close this gap, we analyze the influence of multi-task learning strategies using task families for the English abstractive text summarization task. We group tasks into one of three strategies, i.e., sequential, simultaneous, and continual multi-task learning, and evaluate trained models through two downstream tasks. We find that certain combinations of task families (e.g., advanced reading comprehension and natural language inference) positively impact downstream performance. Further, we find that choice and combinations of task families influence downstream performance more than the training scheme, supporting the use of task families for abstractive text summarization.
The recent success of large language models for text generation poses a severe threat to academic integrity, as plagiarists can generate realistic paraphrases indistinguishable from original work. However, the role of large autoregressive transformers in generating machine-paraphrased plagiarism and their detection is still developing in the literature. This work explores T5 and GPT-3 for machine-paraphrase generation on scientific articles from arXiv, student theses, and Wikipedia. We evaluate the detection performance of six automated solutions and one commercial plagiarism detection software and perform a human study with 105 participants regarding their detection performance and the quality of generated examples. Our results suggest that large models can rewrite text humans have difficulty identifying as machine-paraphrased (53% mean acc.). Human experts rate the quality of paraphrases generated by GPT-3 as high as original texts (clarity 4.0/5, fluency 4.2/5, coherence 3.8/5). The best-performing detection model (GPT-3) achieves a 66% F1-score in detecting paraphrases.